Barren plateaus in quantum neural network training landscapes
Abstract
Many experimental proposals for noisy intermediate-scale quantum devices involve training a parameterized quantum circuit with a classical optimization loop. Such hybrid quantum-classical algorithms are popular for applications in quantum simulation, optimization, and machine learning. Due to their simplicity and hardware efficiency, random circuits are often proposed as initial guesses for exploring the space of quantum states...
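The abstract's claim can be illustrated numerically. The following is a minimal sketch in plain NumPy, not code from the paper: the layered RY-plus-CZ ansatz, the observable (Pauli Z on qubit 0), the layer count, and the sample size are all illustrative assumptions. It estimates one gradient component via the parameter-shift rule over many randomly initialized circuits; the shrinking variance with qubit count is the barren-plateau signature.

import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    # Single-qubit Y rotation exp(-i * theta * Y / 2).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(mats):
    out = np.eye(1)
    for m in mats:
        out = np.kron(out, m)
    return out

def cz_chain_diag(n):
    # Diagonal of CZ gates between neighbouring qubits (qubit 0 = MSB).
    diag = np.ones(2 ** n)
    for q in range(n - 1):
        for idx in range(2 ** n):
            if (idx >> (n - 1 - q)) & 1 and (idx >> (n - 2 - q)) & 1:
                diag[idx] = -diag[idx]
    return diag

def energy(thetas, n, cz_diag, obs_diag):
    # <psi| Z_0 |psi> for |psi> = prod over layers of [CZ . RY(layer)] |0...0>.
    psi = np.zeros(2 ** n)
    psi[0] = 1.0
    for layer in thetas:
        psi = cz_diag * (kron_all([ry(t) for t in layer]) @ psi)
    return float(psi @ (obs_diag * psi))

def gradient_sample(n, layers=4):
    # Parameter-shift gradient of the first angle at a random parameter point.
    thetas = rng.uniform(0, 2 * np.pi, size=(layers, n))
    cz_diag = cz_chain_diag(n)
    obs = np.array([-1.0 if (idx >> (n - 1)) & 1 else 1.0
                    for idx in range(2 ** n)])  # Pauli Z on qubit 0
    plus, minus = thetas.copy(), thetas.copy()
    plus[0, 0] += np.pi / 2
    minus[0, 0] -= np.pi / 2
    return 0.5 * (energy(plus, n, cz_diag, obs) - energy(minus, n, cz_diag, obs))

for n in (2, 4, 6, 8):
    grads = [gradient_sample(n) for _ in range(200)]
    print(f"{n} qubits: Var[dE/dtheta] ~ {np.var(grads):.3e}")

On a typical run the printed variances should shrink steadily as qubits are added, consistent with the exponential suppression the paper proves for sufficiently random circuits; this shallow real-valued ansatz only approximates that regime.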
Similar resources

Training a Quantum Neural Network
Quantum learning holds great promise for the field of machine intelligence. The most studied quantum learning algorithm is the quantum neural network. Many such models have been proposed, yet none has become a standard. In addition, these models usually leave out many details, often excluding how they intend to train their networks. This paper discusses one approach to the problem and what advantages...
Training a classical weightless neural network in a quantum computer
The purpose of this paper is to investigate a new quantum learning algorithm for classical weightless neural networks. The learning algorithm creates a superposition of all possible neural network configurations for a given architecture. The performance of the network over the training set is stored entangled with the neural configuration, and quantum search is performed to amplify the probability a...
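As a rough illustration of the amplification step this abstract alludes to, here is a toy statevector sketch in plain NumPy; the 6-bit configuration encoding and the particular "good" configurations are hypothetical stand-ins for the entangled performance register the paper describes. Grover iterations phase-flip the marked configurations and invert about the mean, boosting their total probability from 2/64 to nearly 1.

import numpy as np

n_bits = 6                        # configurations encoded in 6 bits
N = 2 ** n_bits
good = np.array([5, 42])          # hypothetical "high-performance" configurations

psi = np.full(N, 1 / np.sqrt(N))  # uniform superposition over all configurations
oracle = np.ones(N)
oracle[good] = -1.0               # oracle phase-flips the marked configurations

# Optimal iteration count is about (pi/4) * sqrt(N / number_of_marked).
for _ in range(int(np.pi / 4 * np.sqrt(N / good.size))):
    psi = oracle * psi            # mark good configurations with a phase flip
    psi = 2 * psi.mean() - psi    # diffusion step: inversion about the mean

print("success probability:", float(np.sum(psi[good] ** 2)))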
Quantum anomalous Hall effect with higher plateaus.
The quantum anomalous Hall (QAH) effect in magnetic topological insulators is driven by the combination of spontaneous magnetic moments and spin-orbit coupling. Its recent experimental discovery raises the question of whether higher plateaus can also be realized. Here, we present a general theory for a QAH effect with higher Chern numbers and show by first-principles calculations that a thin film magne...
Incremental Convolutional Neural Network Training
Experimenting with novel ideas on deep convolutional neural networks (DCNNs) using big datasets is hampered by the fact that network training requires huge computational resources in terms of CPU and GPU power and hours. One option is to downscale the problem, e.g., fewer classes and fewer samples, but this is undesirable with DCNNs, whose performance is largely data-dependent. In this work, we take...
Journal
Journal title: Nature Communications
Year: 2018
ISSN: 2041-1723
DOI: 10.1038/s41467-018-07090-4